METHOD AND APPARATUS FOR PROCESSING ACQUIRED IMAGES USING A SINGLE IMAGE SENSOR
ABSTRACT
Efficient method and system for acquiring scene images and iris images using a single sensor. The present disclosure relates to methods and systems for capturing images of an iris and a scene using a single image sensor. An image sensor can capture a view of a scene and a view of an iris in at least one image. An image processing module can apply a level of noise reduction to a first part of the at least one image to produce an image of the scene. The image processing module can apply a reduced level of noise reduction to a second part of the at least one image to produce an iris image for use in biometric identification.
Publication number: BR112013021160B1
Application number: R112013021160-1
Filing date: 2012-02-16
Publication date: 2021-06-22
Inventor: Keith J. Hanna
Applicant: Eyelock Llc
IPC main class:
PATENT DESCRIPTION
RELATED APPLICATION
This application is a continuation of, and claims the priority benefit of, US Patent Application 13/398,562, filed February 16, 2012, entitled "Efficient Method And System For The Acquisition Of Scene Imagery And Iris Imagery Using A Single Sensor", which in turn claims the priority benefit of US Provisional Patent Application 61/443,757, filed February 17, 2011, entitled "Method And System For Iris Recognition And Face Acquisition", and US Provisional Patent Application 61/472,279, filed April 6, 2011, entitled "Efficient Method And System For The Acquisition Of Face And Iris Imagery", all incorporated herein by reference in their entirety for all purposes.
FIELD OF THE DISCLOSURE
The present invention relates to image processing technologies, and more specifically to systems and methods for acquiring scene images and iris images using a single sensor.
BACKGROUND
Typically, biometric systems are designed to obtain optimal images by considering constraints specific to the type of biometric in question. If other data is to be obtained (e.g., face or background imagery), then different sensors typically have to be used, since the requirements for the different types of images are very different. However, such an approach adds cost to the total solution and can also increase the size or footprint of the system. Adam et al., US Patent Publication 2006/0050933, aim to address the problem of obtaining data for use in face and iris recognition using one sensor, but do not address the problem of optimizing image acquisition in such a way that the obtained data is optimal for each of the face and iris recognition components separately. Determan et al., US Patent Publication 2008/0075334, and Saitoh et al., US Patent Publication 2005/0270386, disclose obtaining face and iris images for recognition using a separate sensor for the face and a separate sensor for the iris. Saitoh describes a method for performing iris recognition that includes identifying the position of the iris using a face and iris image, but uses two separate sensors that focus separately on the face and iris, respectively, and obtain data simultaneously, such that movement of the user is not a concern. Determan et al., US Patent Publication 2008/0075334, also discuss using one sensor for both the face and the iris, but do not address the problem of optimizing image acquisition in such a way that the data obtained is optimal for each of the face and iris recognition components separately. Jacobson et al., US Patent Publication 2007/0206840, also describe a system that includes obtaining images of the face and iris, but do not address the problem of optimizing image acquisition in such a way that the data obtained is optimal for each of the face and iris recognition components separately, and do not address how a system of reduced size can be achieved.
SUMMARY
In certain aspects, described in this document are methods and systems for obtaining a high quality image of an iris for biometric identification, and a high quality image of any other view, such as a person's face, with a single sensor. Embodiments of these systems and methods can be employed such that an image is obtained with a single sensor for the purpose of determining or verifying the identity of a person using biometric recognition of the iris, as well as for the purpose of obtaining general images of scenes such as faces and locations. Images of the latter type can typically be taken by a mobile phone user, for example.
As such, the disclosed methods and systems can be incorporated into mobile and/or compact devices. The sensor can be a Complementary Metal Oxide Semiconductor (CMOS) sensor or another suitable type of image capture device. The methods and systems can configure or adjust conditions to be close to ideal in two acquisition modes, for example, an iris image acquisition mode and a standard (e.g., non-iris) image acquisition mode. In some embodiments, systems for obtaining such images, embedded in a device, may have a significantly reduced physical size or footprint compared to devices using multiple sensors, for example. In one aspect, the disclosure describes a method of capturing images of an iris and a scene using a single image sensor. The method may include capturing, by an image sensor, a view of a scene and a view of an iris in at least one image. An image processing module can apply a level of noise reduction to a first part of the at least one image to produce an image of the scene. The image processing module can apply a reduced level of noise reduction to a second part of the at least one image to produce an iris image for use in biometric identification. In some embodiments, the image sensor can capture the scene view and the iris view as separable components within a single image. The image sensor can capture at least one image of the iris while illuminating the iris with infrared illumination. In certain embodiments, the image sensor can activate a plurality of sensor nodes of the image sensor. A first subset of the sensor nodes can be adapted primarily to capture an image of the iris suitable for biometric identification. A second subset of the sensor nodes can be adapted primarily to capture a non-iris image. In certain embodiments, the image processing module can apply noise reduction comprising an averaging or median function. The image processing module can apply noise reduction comprising reduction of both time-varying and non-time-varying noise in a captured image. The image processing module can subtract the noise of one iris image using the noise of another iris image. In certain embodiments, the image processing module can reduce ambient noise in one image by using ambient noise from another image. The image processing module can reduce ambient noise in an image captured in the presence of infrared illumination by using ambient noise from another image captured without infrared illumination. The image processing module can perform gain or brightness control on the second part of the at least one image to produce the iris image for use in biometric identification. In another aspect, the disclosure describes an apparatus for capturing images of an iris and a scene using a single image sensor. The apparatus may include an image sensor and an image processing module. The image sensor can capture a view of a scene and a view of an iris in at least one image. The image processing module can apply a level of noise reduction to a first part of the at least one image to produce an image of the scene. The image processing module can apply a reduced level of noise reduction to a second part of the at least one image to produce an iris image for use in biometric identification. In some embodiments, the image sensor captures the scene view and the iris view as separable components within a single image. The image sensor may comprise a Complementary Metal Oxide Semiconductor (CMOS) sensor, for example.
The image sensor may include a plurality of sensor nodes, with a first subset of the sensor nodes adapted primarily to capture an iris image suitable for biometric identification, and a second subset of the sensor nodes adapted primarily to capture a non-iris image. In certain embodiments, the apparatus includes an illuminator for illuminating the iris with infrared illumination, wherein the image sensor captures at least one image of the illuminated iris. In some embodiments, the noise reduction performed includes applying an averaging or median function to a captured image. The noise reduction can include reduction of both time-varying and non-time-varying noise in a captured image. In certain embodiments, the image processing module subtracts the noise of one iris image using the noise of another iris image. The image processing module can reduce ambient noise in an image captured in the presence of infrared illumination by using ambient noise from another image captured without infrared illumination. In some embodiments, the image processing module performs gain or brightness control on the second part of the at least one image to produce the iris image for use in biometric identification. Certain embodiments of the methods and systems disclosed in this document can address various challenges to obtaining high quality images of a scene as well as high quality iris images with a single sensor. For example, one challenge, perhaps unexpected, concerns managing the sensor's noise properties. It has been found that image quality requirements for iris recognition and for standard scene images sometimes conflict with respect to noise. Noise can be a significant concern because image pixel sizes are becoming smaller and smaller, so fundamental noise levels at each pixel are increasing or becoming more pronounced. We have determined that certain types of noise may actually be preferable or tolerable for iris recognition, for example, when compared to the quality of iris images obtained in standard imaging modes that incorporate noise reduction. As such, we may prefer, perhaps counterintuitively, to retain noise in processed images during iris imaging in order to improve iris identification performance when compared to images that have undergone typical noise reduction. Another challenge concerns the wavelength of illumination required for standard images and for iris images. Iris imaging typically uses infrared illumination, whereas standard imaging typically relies on visible illumination. These can be seen as conflicting constraints if incorporated into a single system for acquiring both types of images. This disclosure outlines several approaches to resolving this. For example, and in one embodiment, different filters can be interleaved in front of the sensor. The filters can have different responses to infrared and visible light. RGB (red, green, blue) filters and filter patterns can be adapted for use in different embodiments. For example, and in certain embodiments, systems and methods can intersperse filters that pass infrared with other filters that are primarily for passing colored or visible light. Examples of this approach are in US Patent Publication 2007/0145273 and US Patent Publication 2007/0024931. An improvement on these approaches includes using an interleaved R,G,(G+I),B array (where I represents infrared). Such an approach may have the advantage of maintaining or recovering full resolution of the G (green) signal, to which the human visual system is most sensitive.
Another embodiment of the methods and systems addresses this challenge by using a removable or retractable IR-cut filter that can be positioned automatically or manually in front of the sensor during the standard image acquisition mode. In yet another embodiment, systems and methods can overlay an IR-cut filter over only a portion of the image sensor that is dedicated to iris recognition. Embodiments of the systems and methods described in this document can address a third challenge, which concerns image deterioration by ambient lighting. In some embodiments where filtering or infrared illumination is not optimal, images of a surrounding scene can be seen reflected off the corneal or eye surface during iris imaging. This can sometimes seriously impact iris recognition performance. Embodiments of the systems and methods described in this document can take at least two images. One of the images can be captured with controlled infrared lighting turned on, and at least one second image can be captured with controlled infrared lighting turned off. An image processing module can process these at least two images to reduce or remove artifacts. By way of illustration, the image processing module can align the images and then subtract them from each other to remove the artifacts. Since the artifact components are essentially unchanged between the two images, whereas the iris texture is illuminated by the infrared lighting and exposed in only one image, a difference of the images can remove the artifacts while maintaining the iris texture. The methods and systems can overcome sensor nonlinearities by identifying pixels that are close to or in the sensor's nonlinear operating range (e.g., saturated or dark) and eliminating them from subsequent iris recognition processing, since the image subtraction process in these regions might be non-linear and artifacts could still remain. In another embodiment of the methods, image deterioration can be managed by exploiting particular geometric constraints of the position of the user, the device, and the source of the deterioration. In certain embodiments, advantage can be taken of the fact that the user can hold the device in front of their face during the iris image acquisition mode, thereby reducing or even blocking ambient light sources from degrading one sector of the obtained iris images. The methods and systems, for example, can limit iris recognition to this sector, thus avoiding problems related to image deterioration.
BRIEF DESCRIPTION OF THE DRAWINGS
The following figures represent certain illustrative embodiments of the methods and systems described in this document, where like reference numerals refer to like elements. Each depicted embodiment is illustrative of these methods and systems and not limiting.
Figure 1A is a block diagram illustrating an embodiment of a network environment with a client machine that communicates with a server;
Figures 1B and 1C are block diagrams illustrating embodiments of computing machines for practicing the methods and systems described in this document;
Figure 2 represents an embodiment of an image intensity profile corresponding to a part of an image;
Figure 3A represents an image intensity profile of an embodiment of non-systematic noise;
Figure 3B represents an image intensity profile of an embodiment of systematic noise;
Figure 4 represents an image intensity profile of an embodiment of systematic noise;
Figure 5 represents an image intensity profile of an embodiment of sporadic noise;
Figure 6 represents an embodiment of an image intensity profile corresponding to a part of an image that has undergone noise reduction;
Figure 7 is a diagram of an embodiment of a face view image including iris texture;
Figure 8 represents an embodiment of an image intensity profile representing iris texture;
Figure 9 represents an embodiment of an image intensity profile representing iris texture after noise reduction;
Figure 10 represents an embodiment of an image intensity profile representing iris texture and noise;
Figure 11 represents an embodiment of a system for acquiring scene images and iris images using a single sensor;
Figure 12 represents a table showing the effect of noise on obtained images;
Figure 13 represents another embodiment of a system for acquiring scene images and iris images using a single sensor;
Figure 14 represents an embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 15 represents a response profile based on a dual bandpass filter;
Figure 16 represents an embodiment of an interleaved filter configuration;
Figure 17 represents an embodiment of an image with artifacts reflected on an eye surface;
Figure 18 represents an embodiment of an image with iris texture and artifacts reflected on an eye surface;
Figure 19 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 20 represents an embodiment of an image showing iris texture with artifacts removed;
Figure 21 represents a scenario for acquiring face and iris images;
Figure 22 represents another embodiment of an image with iris texture and artifacts reflected on an eye surface;
Figure 23 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 24 represents another embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 25 represents an embodiment of a system for acquiring face images and iris images using a single sensor and a mirror;
Figure 26 represents an embodiment of a method for acquiring face images and iris images using a single sensor and a mirror;
Figure 27 represents an effect of ocular dominance on face imaging and iris imaging;
Figure 28 represents another embodiment of a system for acquiring face images and iris images using a single sensor and a mirror;
Figures 29 and 30 represent the effect of ocular dominance on the acquisition of face images and iris images;
Figure 31 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor and a mirror;
Figure 32 represents embodiments of sensor and mirror configurations;
Figure 33 represents another embodiment of a system for acquiring face images and iris images using a single sensor and a mirror;
Figure 34 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor and a mirror;
Figure 35 represents another embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 36 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor;
Figure 37 represents yet another embodiment of a system for acquiring face images and iris images using a single sensor; and
Figure 38 is a flowchart illustrating an embodiment of a method for acquiring scene images and iris images using a single sensor.
DETAILED DESCRIPTION
Before discussing other aspects of the systems and methods for acquiring scene images and iris images using a single sensor, a description of components and system features suitable for use in the present systems and methods may be helpful. Figure 1A illustrates an embodiment of a computing environment 101 that includes one or more client machines 102A-102N (generally referred to herein as "client machine(s) 102") in communication with one or more servers 106A-106N (generally referred to herein as "server(s) 106"). Installed between the client machine(s) 102 and the server(s) 106 is a network. In one embodiment, the computing environment 101 can include an appliance installed between the server(s) 106 and the client machine(s) 102. This appliance can manage client/server connections, and in some cases can perform load balancing of client connections among a plurality of backend servers. The client machine(s) 102 may in some embodiments be referred to as a single client machine 102 or a single group of client machines 102, while the server(s) 106 may be referred to as a single server 106 or a single group of servers 106. In one embodiment a single client machine 102 communicates with more than one server 106, while in another embodiment a single server 106 communicates with more than one client machine 102. In yet another embodiment, a single client machine 102 communicates with a single server 106. A client machine 102, in some embodiments, may be referred to by any of the following terms: client machine(s) 102; client(s); client computer(s); client device(s); client computing device(s); local machine; remote machine; client node(s); endpoint(s); endpoint node(s); or a second machine. The server 106, in some embodiments, may be referred to by any of the following terms: server(s); local machine; remote machine; server farm(s); host computing device(s); or a first machine. The client machine 102 in some embodiments may run, operate or otherwise provide an application which may be any of the following: software; a program; executable instructions; a virtual machine; a hypervisor; a web browser; a web-based client; a client-server application; a thin-client computing client; an ActiveX control; a Java applet; software relating to voice over internet protocol (VoIP) communications, such as a soft IP phone; an application for streaming video and/or audio; an application for promoting real-time data communications; an HTTP client; an FTP client; an Oscar client; a Telnet client; or any other set of executable instructions. Still other embodiments include a client device 102 that displays application output generated by an application running remotely on a server 106 or on another remotely located machine.
In these embodiments, the client device 102 can display the application output in an application window, a browser, or another output window. In one embodiment, the application is a desktop, while in another embodiment the application is an application that generates a desktop. The computing environment 101 may include more than one server 106A-106N, such that the servers 106A-106N are logically grouped into a server farm 106. The server farm 106 may include servers 106 that are geographically dispersed and logically grouped into a server farm 106, or servers 106 that are located close to each other and logically grouped into a server farm 106. Geographically dispersed servers 106A-106N within a server farm 106, in some embodiments, may communicate using a WAN, MAN or LAN, where different geographic regions can be characterized as: different continents; different regions of a continent; different countries; different states; different cities; different campuses; different rooms; or any combination of the preceding geographic locations. In some embodiments the server farm 106 may be administered as a single entity, while in other embodiments the server farm 106 may include multiple server farms 106. In some embodiments, a server farm 106 may include servers 106 that run a substantially similar type of operating system platform (e.g., WINDOWS NT, manufactured by Microsoft Corp. of Redmond, Washington; UNIX; LINUX; or SNOW LEOPARD). In other embodiments, the server farm 106 may include a first group of servers 106 running a first type of operating system platform, and a second group of servers 106 running a second type of operating system platform. The server farm 106, in other embodiments, may include servers 106 that run different types of operating system platforms. The server 106, in some embodiments, can be any type of server. In other embodiments, the server 106 can be any of the following types of servers: a file server; an application server; a web server; a proxy server; an appliance; a network appliance; a gateway; an application gateway; a gateway server; a virtualization server; a deployment server; an SSL VPN server; a firewall; a web server; an application server or a master application server; a server 106 running an active directory; or a server 106 running an application acceleration program that provides firewall functionality, application functionality, or load balancing functionality. In some embodiments, a server 106 can be a RADIUS server that includes a remote authentication dial-in user service. Some embodiments include a first server 106A that receives requests from a client machine 102, forwards the request to a second server 106B, and responds to the request generated by the client machine 102 with a response from the second server 106B. The first server 106A may obtain a list of applications available to the client machine 102, as well as address information associated with an application server 106 hosting an application identified within the list of applications. The first server 106A can then present a response to the client's request using a network interface, and communicate directly with the client 102 to provide the client 102 with access to an identified application. A client machine 102, in some embodiments, may be a client node that seeks access to resources provided by a server 106.
In other embodiments, the server 106 may provide clients 102 or client nodes with access to hosted resources. The server 106, in some embodiments, functions as a master node such that it communicates with one or more clients 102 or servers 106. In some embodiments, the master node can identify and provide address information associated with a server 106 hosting a requested application to one or more clients 102 or servers 106. In yet other embodiments, the master node can be a server farm 106, a client 102, a cluster of client nodes 102, or an appliance. One or more clients 102 and/or one or more servers 106 can transmit data over a network 104 installed between machines and appliances within the computing environment 101. The network 104 can comprise one or more subnets, and can be installed between any combination of the clients 102, servers 106, computing machines, and appliances included within the computing environment 101. In some embodiments, the network 104 can be: a local area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a primary network 104 comprised of multiple subnets 104 located between the client machines 102 and the servers 106; a primary public network 104 with a private subnet 104; a primary private network 104 with a public subnet 104; or a primary private network 104 with a private subnet 104. Further embodiments include a network 104 that can be any of the following types of networks: a peer-to-peer network; a broadcast network; a telecommunications network; a data communication network; a computer network; an ATM (Asynchronous Transfer Mode) network; a SONET (Synchronous Optical Network) network; an SDH (Synchronous Digital Hierarchy) network; a wireless network; a wired network; or a network 104 that includes a wireless link, where the wireless link can be an infrared channel or satellite band. The network topology of the network 104 may differ within different embodiments; possible network topologies include: a bus network topology; a star network topology; a ring network topology; a repeater-based network topology; or a tiered-star network topology. Additional embodiments may include a network 104 of mobile telephone networks that use a protocol for communication among mobile devices, where the protocol can be any of the following: AMPS; TDMA; CDMA; GSM; GPRS; UMTS; 3G; 4G; or any other protocol able to transmit data among mobile devices. Illustrated in Figure 1B is an embodiment of a computing device 100, where the client machine 102 and server 106 illustrated in Figure 1A can be deployed as and/or executed on any embodiment of the computing device 100 illustrated and described in this document. Included within the computing device 100 is a system bus 150 that communicates with the following components: a central processing unit 121; a main memory 122; a storage memory 128; an input/output (I/O) controller 123; display devices 124A-124N; an installation device 116; and a network interface 118. In one embodiment, the storage memory 128 includes: an operating system, software routines, and a client agent 120. The I/O controller 123, in some embodiments, is further connected to a keyboard 126 and a pointing device 127. Other embodiments may include an I/O controller 123 connected to more than one input/output device 130A-130N.
Figure 1C illustrates an embodiment of a computing device 100, where the client machine 102 and server 106 illustrated in Figure 1A can be deployed as and/or executed on any embodiment of the computing device 100 illustrated and described in this document. Included within the computing device 100 is a system bus 150 that communicates with the following components: a bridge 170 and a first input/output device 130A. In another embodiment, the bridge 170 is in further communication with the central processing unit 121, where the central processing unit 121 can further communicate with a second input/output device 130B, a main memory 122, and a cache memory 140. Included within the central processing unit 121 are I/O ports, a memory port 103, and a main processor. Embodiments of the computing machine 100 can include a central processing unit 121 characterized by any one of the following component configurations: logic circuits that respond to and process instructions fetched from the main memory unit 122; a microprocessor unit, such as those manufactured by Intel Corporation; those manufactured by Motorola Corporation; those manufactured by Transmeta Corporation of Santa Clara, California; an RS/6000 processor, such as those manufactured by International Business Machines; a processor such as those manufactured by Advanced Micro Devices; or any other combination of logic circuits. Further embodiments of the central processing unit 121 may include any combination of the following: a microprocessor, a microcontroller, a central processing unit with a single processing core, a central processing unit with two processing cores, or a central processing unit with more than one processing core. Although Figure 1C illustrates a computing device 100 that includes a single central processing unit 121, in some embodiments the computing device 100 can include one or more processing units 121. In these embodiments, the computing device 100 can store and execute firmware or other executable instructions that, when executed, direct the one or more processing units 121 to execute instructions concurrently or to execute instructions concurrently on a single piece of data. In other embodiments, the computing device 100 may store and execute firmware or other executable instructions that, when executed, direct the one or more processing units to each execute a section of a group of instructions. For example, each processing unit 121 can be instructed to execute a part of a program or a particular module within a program. In some embodiments, the processing unit 121 may include one or more processing cores. For example, the processing unit 121 may have two cores, four cores, eight cores, etc. In one embodiment, the processing unit 121 may comprise one or more parallel processing cores. The processing cores of the processing unit 121 may in some embodiments access available memory as a global address space or, in other embodiments, memory within the computing device 100 can be segmented and assigned to a particular core within the processing unit 121. In one embodiment, each of the one or more processing cores or processors in the computing device 100 can access local memory. In yet another embodiment, memory within the computing device 100 can be shared among one or more processors or processing cores, while other memory can be accessed by particular processors or subsets of processors.
In embodiments where the computing device 100 includes more than one processing unit, the multiple processing units may be included on a single integrated circuit (IC). These multiple processors, in some embodiments, can be linked together by an internal high-speed bus, which may be referred to as an element interconnect bus. In embodiments where the computing device 100 includes one or more processing units 121, or a processing unit 121 including one or more processing cores, the processors can execute a single instruction simultaneously on multiple pieces of data (SIMD), or in other embodiments can execute multiple instructions simultaneously on multiple pieces of data (MIMD). In some embodiments, the computing device 100 can include any number of SIMD and MIMD processors. The computing device 100, in some embodiments, may include an image processor, a graphics processor, or a graphics processing unit. The graphics processing unit can include any combination of software and hardware, and can additionally input graphics data and graphics instructions, render a graphic from the input data and instructions, and output the rendered graphic. In some embodiments, the graphics processing unit can be included within the processing unit 121. In other embodiments, the computing device 100 can include one or more processing units 121, where at least one processing unit 121 is dedicated to processing and rendering graphics. One embodiment of the computing machine 100 includes a central processing unit 121 that communicates with the cache memory 140 via a secondary bus, also known as a backside bus, while another embodiment of the computing machine 100 includes a central processing unit 121 that communicates with the cache memory via the system bus 150. The local system bus 150, in some embodiments, can also be used by the central processing unit to communicate with more than one type of input/output device 130A-130N. In some embodiments, the local system bus 150 can be any of the following types of buses: a VESA VL bus; an ISA bus; an EISA bus; a MicroChannel Architecture (MCA) bus; a PCI bus; a PCI-X bus; a PCI-Express bus; or a NuBus. Other embodiments of the computing machine 100 include an input/output device 130A-130N that is a video display 124 that communicates with the central processing unit 121. Still other versions of the computing machine 100 include a processor 121 connected to an input/output device 130A-130N via any of the following connections: HyperTransport, Rapid I/O, or InfiniBand. Further embodiments of the computing machine 100 include a processor 121 that communicates with one input/output device 130A using a local interconnect bus and with a second input/output device 130B using a direct connection. The computing device 100, in some embodiments, includes a main memory unit 122 and a cache memory 140. The cache memory 140 can be any type of memory, and in some embodiments can be any of the following types of memory: SRAM; BSRAM; or EDRAM.
Other embodiments include a cache memory 140 and a main memory unit 122 that can be any of the following types of memory: Static Random Access Memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM); Dynamic Random Access Memory (DRAM); Fast Page Mode DRAM (FPM DRAM); Enhanced DRAM (EDRAM); Extended Data Output RAM (EDO RAM); Extended Data Output DRAM (EDO DRAM); Burst Extended Data Output DRAM (BEDO DRAM); Synchronous DRAM (SDRAM); JEDEC SRAM; PC100 SDRAM; Double Data Rate SDRAM (DDR SDRAM); Enhanced SDRAM (ESDRAM); SyncLink DRAM (SLDRAM); Direct Rambus DRAM (DRDRAM); Ferroelectric RAM (FRAM); or any other type of memory. Further embodiments include a central processing unit 121 that can access the main memory 122 via: a system bus 150; a memory port 103; or any other connection, bus, or port that allows the processor 121 to access the memory 122. One embodiment of the computing device 100 provides support for any of the following installation devices 116: a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, tape drives of various formats, a USB device, bootable media, a bootable CD, a bootable CD for a GNU/Linux distribution such as KNOPPIX®, a hard disk drive, or any other device suitable for installing applications or software. Applications may in some embodiments include a client agent 120, or any part of a client agent 120. The computing device 100 can further include a storage device 128 that can be one or more hard disk drives, or one or more redundant arrays of independent disks, where the storage device is configured to store an operating system, software, program applications, or at least a part of the client agent 120. A further embodiment of the computing device 100 includes an installation device 116 that is used as the storage device 128. The computing device 100 can further include a network interface 118 for interfacing to a Local Area Network (LAN), Wide Area Network (WAN), or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56 kb, X.25, SNA, DECNET), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can also be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, RS485, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, CDMA, GSM, WiMax, and direct asynchronous connections). One version of the computing device 100 includes a network interface 118 able to communicate with additional computing devices 100' via any type and/or form of gateway or tunneling protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. Versions of the network interface 118 can comprise any of: a built-in network adapter; a network interface card; a PCMCIA network card; a card-bus network adapter; a wireless network adapter; a USB network adapter; a modem; or any other device suitable for interfacing the computing device 100 to a network capable of communicating and performing the methods and systems described in this document.
Embodiments of the computing device 100 include any of the following input/output devices 130A-130N: a keyboard 126; a pointing device 127; mice; trackpads; an optical pen; trackballs; microphones; drawing tablets; video displays; speakers; inkjet printers; laser printers; dye-sublimation printers; or any other input/output device able to perform the methods and systems described in this document. An input/output controller 123 may in some embodiments connect to multiple input/output devices 130A-130N to control the one or more input/output devices. Some embodiments of the input/output devices 130A-130N can be configured to provide storage or an installation medium 116, while others can provide a universal serial bus (USB) interface for receiving USB storage devices, such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. Still other embodiments include an input/output device 130 that can be a bridge between the system bus 150 and an external communication bus, such as: a USB bus; an Apple Desktop Bus; an RS-232 serial connection; a SCSI bus; a FireWire bus; a FireWire 800 bus; an Ethernet bus; an AppleTalk bus; a Gigabit Ethernet bus; an Asynchronous Transfer Mode bus; a HIPPI bus; a Super HIPPI bus; a SerialPlus bus; an SCI/LAMP bus; a FibreChannel bus; or a Serial Attached Small Computer System Interface bus. In some embodiments, the computing machine 100 can run any operating system, while in other embodiments the computing machine 100 can run any of the following operating systems: versions of the MICROSOFT WINDOWS operating systems; the different releases of the Unix and Linux operating systems; any version of MAC OS manufactured by Apple Computer; OS/2, manufactured by International Business Machines; Android by Google; any embedded operating system; any real-time operating system; any open source operating system; any proprietary operating system; any operating system for mobile computing devices; or any other operating system. In yet another embodiment, the computing machine 100 can run multiple operating systems. For example, the computing machine 100 can run PARALLELS or another virtualization platform that can run or manage a virtual machine running a first operating system, while the computing machine 100 runs a second operating system different from the first. The computing machine 100 can be embodied in any of the following computing devices: a computing workstation; a desktop computer; a laptop or notebook computer; a server; a handheld computer; a mobile phone; a portable telecommunication device; a media playing device; a gaming system; a mobile computing device; a netbook; a tablet; a device of the IPOD or iPad family of devices manufactured by Apple Computer; any of the PLAYSTATION family of devices manufactured by Sony Corporation; any of the Nintendo family of devices manufactured by Nintendo Co; any of the XBOX family of devices manufactured by Microsoft Corporation; or any other type and/or form of computing, telecommunications, or media device that is capable of communication and that has sufficient processor power and memory capacity to perform the methods and systems described in this document.
In other embodiments, the computing machine 100 can be a mobile device such as any of the following mobile devices: a JAVA-enabled cellular telephone or personal digital assistant (PDA); any computing device that has different processors, operating systems, and input devices consistent with the device; or any other mobile computing device capable of performing the methods and systems described in this document. In yet other embodiments, the computing device 100 can be any of the following mobile computing devices: any of the Blackberry series, or other handheld device manufactured by Research In Motion Limited; the iPhone manufactured by Apple Computer; the Palm Pre; a Pocket PC; a Pocket PC Phone; an Android phone; or any other handheld mobile device. Having described certain system components and features that may be suitable for use in the present systems and methods, further aspects are discussed below. Figure 2 represents an illustrative image of a typical scene or object (e.g., a house) taken by a typical image sensor. An image sensor may include, for example, a digital charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) active-pixel sensor, although it is not limited to these. The graph or profile of intensities corresponding to the image shows, for a cross-sectional region indicated by line P2, the intensity value I of pixels on the vertical axis and the corresponding spatial position X. Light and dark points in the intensity profile correspond to light and dark points in the image, as shown. Typically, there can be substantial noise in the signal, represented by fluctuations in intensity even within evenly lit areas (e.g., regions corresponding to the door of the house). Noise can derive from a variety of sources, for example, amplifier noise and shot noise, anisotropic (systematic) noise, and sporadic noise. Shot noise refers to the quantum effect of having a finite number of photons being collected at a particular pixel location in a finite period of time. The smaller the pixel size, the greater the resulting shot noise can be. This is because there are fewer photons from which to infer a measurement of the incident illumination. As pixel dimensions get smaller, the focal length of the associated optics for a given image resolution can also drop linearly. This can reduce the thickness of the combined lens/sensor component. However, as requirements for sensor resolution increase, and space constraints for sensors and their associated optics become tighter, sensor and image pixel sizes have to be correspondingly reduced to accommodate the requirements and constraints. A result of the reduction in pixel size is a substantial increase in sensor noise. This type of noise, as well as amplifier noise, can be characterized as time-varying and non-systematic, as depicted in Figure 3A. Another type of noise is anisotropic, or systematic/periodic, noise. Periodic noise can be caused, for example, by differences in amplifier gains in the image sensor readout path. For example, different rows and columns may pass through different amplifiers with slightly different gains. This type of systematic noise is represented in Figure 3B, where an intensity profile that should be uniformly flat is in fact oscillating periodically in one dimension (for example, across an image). Figure 4 represents an example of sporadic noise introduced into an image, which may be evident across multiple images. For example, occasional pixels in a set of sensor nodes may have degraded sensitivity, be non-functional, or have limited or excessive gain, resulting in pixels that are brighter or darker, as shown.
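To make the pixel-size argument concrete, the following Python sketch (illustrative only, and not part of the disclosed apparatus) simulates photon shot noise as a Poisson process; the photon counts are assumptions chosen only to show the trend.

```python
# Illustrative sketch (not part of the disclosed apparatus): photon shot
# noise follows Poisson statistics, so the signal-to-noise ratio (SNR) of
# a pixel scales with the square root of the photon count it collects.
# Halving a pixel's linear size quarters its area and photon count, and
# therefore halves its SNR. The photon counts below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

for photons in (10000, 2500, 625):  # e.g., pixel area shrinking 4x each step
    samples = rng.poisson(lam=photons, size=100000)
    snr = samples.mean() / samples.std()
    print(f"mean photons per pixel: {photons:6d}  SNR ~ {snr:6.1f} "
          f"(theory: sqrt(N) = {np.sqrt(photons):6.1f})")
```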
Problems arising from noise are typically addressed by performing noise reduction in an image processing module 220. The image processing module 220 can employ any type of spatial median filtering or region-selective averaging, as depicted in Figure 5. There are many methods for performing noise reduction; median filtering and region-selective averaging are identified here for illustration only. Figure 6 represents a profile of intensities that can result from noise reduction. Although the noise reduction may have essentially removed the noise, the image processing module 220 retained features (e.g., bright and dark spots) corresponding to actual objects and edges in the scene. From a user perspective, the image quality in Figure 2 is typically unacceptable (e.g., noisy), whereas that in Figure 6 is considered to be of better quality. Figure 7 represents an image of an iris I1 and a face F1. The image can be obtained using an ideal iris image acquisition system, for example, in accordance with specifications outlined in National Institute of Standards and Technology (NIST) standards. These specifications may include what is described in ANSI/INCITS 379-2004, Iris Image Interchange Format. Referring to Figure 7, the iris texture is represented by the lines within the circular region indicated by I1. Figure 8 shows a representation of the iris texture intensity profile. In some embodiments, the similarity between Figure 8 (the iris texture intensity profile) and Figure 2 (the noise signal intensity profile) may be very apparent. One reason for such similarity is that the source of each signal/pattern is characterized by a random process. In the case of the iris, the signal is created by the tearing of iris tissue before birth, much as a paper tear is different each time it occurs. In the case of sensor noise, shot noise and other noise are created by random, time-varying physical processes. Frequency characteristics of the iris "texture" signal have been characterized to some degree in NIST standards [ANSI/INCITS 379-2004, Iris Image Interchange Format]; for example, minimum resolution values corresponding to line-pairs per millimeter (mm) can be assigned for different ranges of iris diameters. The iris diameter may depend on a particular optical configuration. By way of illustration, for an iris diameter between 100-149 pixels, the defined pixel resolution can be a minimum of 8.3 pixels per mm, with an optical resolution, at 60% modulation, of a minimum of 2.0 line-pairs per mm. For an iris diameter between 150-199 pixels, the defined pixel resolution can be a minimum of 12.5 pixels per mm, with an optical resolution, at 60% modulation, of a minimum of 3.0 line-pairs per mm. For an iris diameter of 200 or more pixels, the defined pixel resolution can be a minimum of 16.7 pixels per mm, with an optical resolution, at 60% modulation, of a minimum of 4.0 line-pairs per mm. Other combinations of diameter, defined pixel resolution and/or optical resolution may be suitable in certain embodiments. Figure 9 represents a profile of iris texture intensities after going through some of the noise reduction processing described above. In this illustrative case, the iris texture was essentially removed by the noise reduction. This is because noise reduction algorithms, such as region-selective averaging, may be unable to differentiate between iris texture and noise.
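The point can be illustrated with a minimal sketch: a median filter attenuates a fine, random, iris-like profile just as it attenuates sensor noise. The 1-D signal below is a synthetic stand-in for the profiles of Figures 8 and 9, not data from the disclosure.

```python
# Illustrative sketch: a median filter cannot distinguish fine random
# iris texture from sensor noise -- both are attenuated alike. The 1-D
# "texture" below is a synthetic stand-in for Figures 8 and 9.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(1)
background = np.full(256, 128.0)           # uniform region of the scene
iris_texture = rng.normal(0, 20, 256)      # fine, random, iris-like detail

profile = background + iris_texture
denoised = median_filter(profile, size=9)  # typical spatial median filter

print("texture energy before filtering:", np.std(profile).round(1))
print("texture energy after filtering: ", np.std(denoised).round(1))
# The residual after filtering is substantially smaller: the "iris
# texture" was attenuated together with the noise, as in Figure 9.
```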
As such, noise reduction, which is standard or typical in most image capture devices, can be a limitation when such devices are adapted to perform iris recognition. The present systems and methods can address this problem by recognizing particular features related to iris recognition. Figure 10 illustrates, in one embodiment, an optimally obtained iris texture intensity profile (e.g., per NIST standards [ANSI/INCITS 379-2004, Iris Image Interchange Format]), together with a sensor noise intensity profile shown as a dotted line. Certain iris recognition processes involve identifying the lack of statistical independence between an enrolled signal and a probe signal. The significance is that a match is typically declared when a comparison produces a result that is unlikely to have been achieved by a random process. As such, adding significant random, time-varying noise to an original iris signal may therefore: 1) not significantly increase the rate of false matches, since false matches result from non-random matching; 2) have limited impact on the false rejection rate for an individual if the texture of the iris signal generally or essentially exceeds that of the sensor noise (for example, even if the images themselves appear noisy to an observer); and 3) increase the false rejection rate for the user (with other limited consequences) if the texture of the iris signal has a magnitude similar to or lower than the magnitude of the sensor noise. Adding systematic noise to the original iris signal, however, as shown in Figure 3B, for example, could trigger a false match, because a comparison between two data sets could produce a result that would not have been achieved through a random process. As such, certain embodiments of the methods and systems may prefer (perhaps counterintuitively) the presence of noise (e.g., even significant levels of noise) in a captured iris image, to improve iris identification performance when compared to images having reduced noise levels (e.g., through noise reduction). In some embodiments, the present systems can reduce or eliminate the level of non-systematic noise reduction applied to an image when the image is intended for iris recognition. The resulting images can potentially appear extremely noisy to an observer when compared to processed images (e.g., with noise reduction applied). However, iris recognition performance can be significantly improved if the noisy image is used instead for iris recognition. In some particular hardware implementations, noise reduction algorithms are hard-wired into the device and cannot be turned off. Some embodiments of the present methods and systems allow control over the noise reduction algorithms in order to avoid noise reduction in the frequency bands expected for iris texture, as described elsewhere in the specification. Figure 11 represents an exemplary implementation of an approach by which a main processor can control an image signal processor, e.g., a low-level image signal processor. In a mode in which iris recognition is performed, a signal can be transmitted to the image signal processor to modify the noise reduction process as described above.
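A minimal sketch of this mode-dependent control is given below; the function and mode names are hypothetical, not the disclosed API. Noise reduction is applied in full to the scene portion of a frame but bypassed for the iris portion, so that the iris texture survives for biometric matching.

```python
# Minimal sketch of mode-dependent noise reduction; the names and the
# 3x3 median kernel are illustrative assumptions, not the disclosed
# implementation. In iris mode, noise reduction is bypassed for the
# iris region while the scene region still receives full reduction.
import numpy as np
from scipy.ndimage import median_filter

def process_frame(frame, iris_mask, mode):
    """frame: 2-D array; iris_mask: boolean array marking the iris region."""
    if mode == "scene":
        return median_filter(frame, size=3)    # full noise reduction
    out = median_filter(frame, size=3)         # denoise the scene portion
    out[iris_mask] = frame[iris_mask]          # keep raw pixels for the iris
    return out

frame = np.random.default_rng(2).integers(0, 255, (64, 64)).astype(float)
iris_mask = np.zeros((64, 64), dtype=bool)
iris_mask[16:48, 16:48] = True                 # assumed iris location
iris_ready = process_frame(frame, iris_mask, mode="iris")
```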
Depending on the magnitude of the systematic noise, such noise can be removed (e.g., using dynamic row calibration, whereby pixels at an edge of the sensor are covered and can be used for sensor calibration), or can be left intact if its magnitude is substantially less than the magnitude of the iris texture signal. By way of illustration, Figure 12 shows a table summarizing different scenarios, and describes how different types of noise can affect iris recognition performance and/or the quality of visible images in different image acquisition modes. Another challenge relating to obtaining optimal standard scene images and iris images with the same sensor concerns the wavelength of illumination required for standard images and for iris images. Iris imaging typically requires infrared illumination, whereas standard imaging typically requires visible illumination. These are sometimes conflicting constraints. Some embodiments of the present systems can be configured to address this by interleaving filters having different responses to infrared and visible light. These systems can use one of a plurality of different configurations of such filters over an image sensor when capturing an image. An example of a filter that can be incorporated or modified to produce an interleaved filter is one having a standard Bayer RGB (red, green, blue) pattern (see, for example, US Patent 3,971,065). Filters that (primarily, significantly or exclusively) pass infrared can be interspersed with other filters that (primarily, significantly or exclusively) pass colored or visible light. Some embodiments of filters that provide selected filtering are described in US Patent Publication 2007/0145273 and US Patent Publication 2007/0024931. Some embodiments of the present systems and methods instead use an interleaved R,G,(G+I),B array. Some of these systems have the ability to maintain full (or substantially full) resolution of the G (green) signal, to which the human visual system is typically most sensitive. In iris recognition mode, the magnitude of the G (green) response is typically much smaller than that of the infrared response because of the incident infrared illumination. In some embodiments, an estimate of the infrared (I) signal response in iris recognition mode can be recovered by subtracting the (G) signal from the adjacent (G+I) signal. In standard image acquisition mode, the R,G,(G+I),B signal can be processed to recover an estimate G' of G at the pixel at which G+I was recovered. Various methods can be used to generate such estimates, such as when an R,G,T,B pixel array is used, where T is fully transparent. The T pixel in such an implementation may contain signals from the R, G, B and I signals accumulated or superimposed together. This can be problematic. If the T pixel filter is truly transparent, then for effective performance the sum of the R,G,B,I responses must still lie within the pixel's dynamic range. For a given integration time and pixel area over a whole image, this means that the dynamic range of the R,G,B pixels cannot be fully utilized, since saturation of the T pixel (R+G+B+I) may occur. Establishing a different pixel area or gain for the T pixel when compared to the other R,G,B pixels may be possible, but can be expensive to implement.
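The subtraction just described can be sketched as follows for an assumed 2x2 tile layout [[R, G], [G+I, B]]; the actual mosaic geometry is a design choice not fixed by the disclosure. In iris mode the infrared estimate is I ≈ (G+I) − G using adjacent pixels; in standard mode the green estimate G' at the (G+I) site is taken here from the neighboring G pixel, a simple nearest-neighbor stand-in for proper demosaicing.

```python
# Sketch of the (G+I) - G subtraction for an assumed 2x2 tile layout
# [[R, G], [G+I, B]]; the mosaic geometry and the nearest-neighbor
# green estimate are illustrative assumptions.
import numpy as np

def split_mosaic(raw):
    """raw: 2-D mosaic with period-2 tiling [[R, G], [G+I, B]]."""
    r   = raw[0::2, 0::2]
    g   = raw[0::2, 1::2]
    gpi = raw[1::2, 0::2]   # the (G+I) pixels
    b   = raw[1::2, 1::2]
    return r, g, gpi, b

raw = np.random.default_rng(3).uniform(0, 255, (8, 8))  # synthetic frame
r, g, gpi, b = split_mosaic(raw)

infrared_estimate = np.clip(gpi - g, 0, None)  # iris mode: I ~ (G+I) - G
green_estimate    = g                          # standard mode: G' ~ adjacent G
```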
One improvement, which can be incorporated into the present systems, is to use a neutral density filter in place of the transparent filter. The neutral density filter can reduce the magnitude of illumination of all wavelengths (R,G,B and I) at that pixel, and can allow the full or a wide range of the pixel capacity to be exploited at the R,G,B pixels, thus reducing noise. A neutral density filter with a value of 0.5 to 0.6 can be selected, as an example, since a green signal typically contributes approximately 60% of a luminance signal comprised of R, G and B combined together. If a T filter is truly transparent, the sensor's total dynamic range will typically need to be reduced to accommodate the range of the T pixel and keep it within a linear range, at the cost of the signal-to-noise ratio of the R,G,B pixels. By incorporating an R,G,(G+I),B filter array in some embodiments of our systems, and since red and blue signals are not present at the (G+I) pixel, the sensor's total dynamic range can be increased when compared to that of an R,G,T,B array, thus increasing the signal-to-noise ratio. Another approach incorporated in some embodiments of our methods and systems for obtaining standard scene images and ideal iris images with the same sensor, relating to the wavelength of the illumination, involves multiplexing or placing an infrared-cut filter over a standard imaging sensor or lens. In one embodiment, a portion of the sensor (e.g., 20% of the sensor or sensor nodes) can be designated primarily for iris recognition, while the remaining part (e.g., 80%) can be used for standard image acquisition, for example, as shown in Figure 14. The larger part (e.g., 80%) of the sensor, as in this example, can be covered by a standard IR-cut filter. The remaining 20% of the sensor can remain uncovered. In iris recognition mode, the covered region can be ignored. For example, an iris recognition application running on the image capture device can guide the user to position their eyes within the detection region of the 20% uncovered area. Feedback mechanisms can guide the user to shift the image capture device to place the user's irises within an appropriate capture region. For example, since the face will be visible in the remaining 80% of the imager, this can be used for user guidance feedback, optionally with icons appearing in place of the eye region. In some embodiments, the image sensor can adjust its orientation to capture an image of the user's iris using the uncovered region. Another approach incorporated in some embodiments of the present systems and methods uses a dual bandpass filter over all or a substantial part of an imager or color sensor. Such a filter can pass both R,G,B signals and infrared signals within selected bands, such as bands around 850 nm or 940 nm, and can produce a frequency response as depicted in Figure 15. In yet another embodiment, an image acquisition system can use an IR-cut filter that can be positioned, automatically or manually, or slid into place, over at least a portion of the image sensor when the device is in standard image capture mode. For example, the IR-cut filter can be positioned so that a portion of the image sensor aligned with a user's eye can capture iris images, while the other parts of the image sensor capture parts of the user's face, for example. Placement of the IR-cut filter can be at one end of the sensor, thus allowing the sensor, and correspondingly the captured image, to have two distinct regions (IR-cut and non-IR-cut) instead of three or more regions (e.g., non-IR-cut, IR-cut and non-IR-cut). This allows a larger and more contiguous non-iris part of a scene (e.g., a face) to be obtained, which in turn can be used for face identification, for example.
This allows a larger and more contiguous non-iris part of a scene (e.g., a face) to be obtained, which in turn can be used for face identification, for example. In some embodiments, a visible-light filter or IR-pass filter can optionally be placed over the image sensor when the device is in iris image capture mode. In some embodiments, the image acquisition system can interleave IR-cut and IR-pass filters across the sensor, for example, as shown in Figure 16. An interleaved filter can be configured in various other ways, such as using a checkerboard arrangement, strips of various widths, or other alternating and/or repeating patterns. In iris recognition mode, the response of pixels/sensor nodes under the IR-pass filter bands is used for iris recognition, while the response of pixels/sensor nodes under the IR-cut filter bands is used in standard image acquisition mode. In some embodiments, both standard and iris images can be obtained with a single image capture, for example, by separating the IR and non-IR image components corresponding to the interleaving pattern. In some embodiments, an image taken by an image sensor may be affected or corrupted by ambient lighting. For example, in some embodiments where infrared filtering and/or lighting is not ideal, images of a scene may be reflected off a surface of an eye (e.g., the cornea) during iris imaging. An example of this is shown in Figure 17. The reflected images (e.g., on the cornea of the eye) can be a reflection of a scene comprising houses surrounding the user, as an example. Such reflections can be referred to as artifacts. It was described above how systematic noise can seriously impact iris recognition performance; artifacts can be overcome using similar methods: obtaining at least two images, at least one with controlled infrared illumination turned on, as shown in Figure 18, and at least a second image with the controlled infrared illumination turned off, as shown in Figure 17. The image processing module can process these at least two images to reduce or remove the artifacts. For example, and in some embodiments, the image processing module can align the images and then subtract one from the other, as shown in the processing diagram in Figure 19. Since the illumination of the artifacts is essentially unchanged between the two images, while the iris texture is illuminated by the infrared illumination, the artifact can be removed by taking a difference, while the iris texture remains. The remaining iris texture is illustrated by the lines within the iris in Figure 20. The system can additionally overcome non-linearities in the sensor, for example, by identifying pixels that are close to or within the sensor's non-linear operating range (e.g., saturated or dark). The image processing module can eliminate the identified pixels from subsequent iris recognition processing, since the image subtraction process in these regions can be non-linear and artifacts could still remain after subtraction. Another embodiment of the present methods manages image degradation by exploiting particular geometric constraints on the position of the user, the image capture device, and the source of degradation or artifacts. The image processing module can be configured to recognize that, as the user holds the image capture device in front of the user's face during iris imaging mode, the image capture device may reduce or even block sources of ambient-lighting degradation within a sector of the iris images obtained, for example, as shown in Figure 21.
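The two-image artifact-removal step described above can be sketched as follows. The translation-only alignment via phase correlation and the saturation threshold are illustrative assumptions; a deployed system might use a richer motion model or a different alignment algorithm.

```python
import numpy as np

def remove_ambient_artifacts(ir_on, ir_off, sat_level=250):
    """Subtract an ambient-only frame from an IR-illuminated frame.

    ir_on:  frame captured with controlled infrared illumination on
    ir_off: frame captured with the infrared illumination off
    Ambient reflections (e.g., scenery mirrored on the cornea) appear in
    both frames and largely cancel; the IR-lit iris texture remains.
    """
    # Translation-only alignment via phase correlation (illustrative).
    f = np.fft.fft2(ir_on) * np.conj(np.fft.fft2(ir_off))
    f /= np.abs(f) + 1e-9
    shift = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(f))), ir_on.shape)
    aligned_off = np.roll(ir_off, shift, axis=(0, 1))

    diff = ir_on.astype(np.int32) - aligned_off.astype(np.int32)

    # Pixels near the sensor's non-linear range (saturated or dark) subtract
    # non-linearly; mask them out of subsequent iris processing.
    invalid = (ir_on >= sat_level) | (aligned_off >= sat_level)
    diff[invalid] = 0
    return np.clip(diff, 0, 255).astype(np.uint8)
```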
The image processing module can limit iris recognition primarily or solely to this sector, thus avoiding problems related to degradation of the iris image, as depicted in Figure 22. In some embodiments, iris recognition based on this sector of the image may be weighted more heavily than other sectors when deciding on a biometric match. In some embodiments, infrared illumination is not readily available or guaranteed during image capture. Image acquisition system 200 can be configured to control and/or provide infrared illumination. The image acquisition system can reduce energy usage by illuminating the infrared source (e.g., LEDs) only when the device is in iris recognition mode, as shown in Figure 23. Figure 24 depicts an embodiment of an image acquisition system 200 using some features of the systems and methods disclosed in this document. The image acquisition system 200 can be incorporated into a device, such as a mobile and/or compact device. The device may include a screen with a sensor, and infrared LEDs can provide illumination. A user can use a touchscreen or other input device (e.g., keyboard, button, or voice command recognition) to switch between iris recognition mode and standard image capture mode. The device may include an application through which a user can activate an image capture mode. The application can additionally provide a feedback or guidance mechanism to automatically locate the user's iris, or to guide the user to shift the user's iris into a suitable capture region. In some embodiments, an optional IR-cut filter can be activated or moved over the image sensor, either manually or automatically, when in iris image capture mode. Other filters (e.g., an IR-pass filter) can be incorporated and/or activated in the appropriate mode(s). In certain embodiments, certain features of the image acquisition system 200 may be contained in an accessory or extension attachment for an existing or mobile device. As an example, such features may include an infrared illuminator, one or more filters and/or an interface (e.g., wireless or physical) to the existing or mobile device. In some embodiments, image acquisition system 200 may include infrared illuminators embedded in a screen of the image acquisition system 200 to illuminate a user's eye with infrared illumination. Screens and displays typically use white LED lighting under an LCD matrix. By adding near-infrared illuminators, or replacing some portion of the visible-light LEDs with them, an IR light source can be provided by the display itself. In such an embodiment, the image acquisition system 200 may not require an additional lighting apparatus or area in the image acquisition system 200 to provide infrared illumination, thus saving space. In certain embodiments, the image acquisition system 200 can include a visible illuminator, for example, with two intensities of illumination. The visible illuminator can be turned on at low power during iris imaging mode. The low-energy lighting can be chosen so as not to distract or cause discomfort to the user. In some embodiments, the brightness level in low-power mode may be at least a factor of 2 darker than the full brightness of the visible illuminator. The latter brightness level, for example, can be used to illuminate a larger scene. The low-energy visible illuminator can be used to constrict the pupil and thereby increase the visible iris area, regardless of whether the user is in the dark or not. However, since the visible illuminator can be close to the eye, some of the filters described above may still pass significant visible light to the sensor.
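The resulting illumination sequencing, including the switch-off of visible light just before the infrared exposure described next, can be sketched as follows. The timing value, the 0.5 brightness level (i.e., at least a factor of 2 below full brightness), and the camera/illuminator interfaces are assumptions for illustration, not disclosed parameters.

```python
import time

def capture_iris_frame(camera, illum):
    """Illumination sequencing for iris capture (a sketch, not firmware).

    camera and illum are assumed objects exposing capture(), set_visible()
    and set_infrared(); they stand in for whatever driver layer the device
    actually provides.
    """
    illum.set_visible(0.5)      # low-power visible light constricts the pupil,
    time.sleep(0.5)             # enlarging the visible iris area
    illum.set_visible(0.0)      # visible off so no stray visible light passes
    illum.set_infrared(True)    # NIR LEDs on only for the exposure, saving power
    frame = camera.capture()
    illum.set_infrared(False)
    return frame
```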
Accordingly, in some embodiments, visible light is turned off before iris images are taken, while the near-infrared illuminator is turned on. In an alternative embodiment, the screen itself can be used as a visible light source. In some embodiments, an advantage of using a single sensor in the image acquisition system 200 is that the space taken up by the system can be minimized when compared to using dual sensors. In either case, however, an important consideration is the ability of the user and/or operator to use the single- or dual-sensor device effectively. In some embodiments, a mirrored surface can be used to help guide a user to align the user's iris with an appropriate capture zone of the image sensor. A mirrored surface can provide feedback to the user regarding the user's position, as depicted in Figure 25, where a user is holding a device in front of him and a virtual image of a part of the user's face is seen at twice the distance from the user to the device. However, because of a property of the human visual system, ocular dominance, and the requirements of the present iris recognition system, the optimal mirror size may not scale linearly with the user's distance from the mirror as might be expected. In fact, under some conditions, an increase in mirror size intended to improve iris recognition performance can degrade performance or cause difficulty in alignment. Ocular dominance is a tendency to prefer visual input from one eye or the other. It occurs in most individuals, with 2/3 of individuals having right-eye dominance and 1/3 having left-eye dominance. The present systems and methods address ocular dominance and combine its properties with iris recognition constraints in order to maximize the size of the recovered iris images while minimizing the size of the mirror used to guide the user. Figure 26 depicts a mirror's reflective field of view of a size such that both eyes comfortably occupy the field of view. In some embodiments, the width of the mirror is such that, at the viewing distance of the image acquisition device 200, the reflective field of view is at least approximately 50% larger than the reflected eye separation. For illustrative purposes the user is shown at the center of the mirror. Figure 27, however, shows that in practice, because of ocular dominance, a user is typically positioned to one side of the mirror, in such a way that their dominant eye is closer to the center of the mirror. If the width of the mirror's reflective field of view is at least 50% greater than a typical user's eye separation (6.5-7 cm), then both eyes may remain within the field of view. Therefore, both eyes can be captured by the image acquisition system 200 even for people with ocular dominance, since both eyes can remain within the field of view of the image sensor in such a case. However, the iris diameter in the captured image can be relatively small, as a lens for the sensor is typically chosen to cover a wide field of view. Figure 28 depicts, without consideration of ocular dominance, a configuration for obtaining images of both eyes using a smaller mirror. The mirror's field of view is smaller, thus minimizing its area on the image acquisition system 200. Both eyes can be captured if the user is positioned at the center of the mirror. However, as described above, because of ocular dominance, the user is typically positioned to the right or left of this ideal position, as shown in Figures 29 and 30.
In this scenario, one of the eyes may fall outside the field of view of the camera. Thus, although this configuration has a moderately large mirror, and even if the lens can be configured to capture both eyes (when in a central position), because of ocular dominance the image acquisition system 200 can only reliably capture a single eye in practice. Figure 31 depicts a design that obtains higher resolution iris images when compared to Figure 30 (i.e., improving iris recognition performance), yet uses a smaller mirror such that only the dominant eye is observed by the user. By limiting the mirror size so that only the dominant eye is in the field of view, the user's visual system's tendency to choose the left or right eye is forced to be a binary response (e.g., right or left eye), as opposed to a variable or unpredictable response (e.g., eyes shifted to the right or left) within the field of view. In some embodiments, the image acquisition system 200 can operate with or include a mirror with a diameter of about 14 mm at an operating distance of approximately 9" (22.86 centimeters), such that the mirror's reflective field of view corresponds to approximately two typical iris diameters (2 x 10.5 mm). Figure 32 summarizes and illustrates the size of the mirror's effective field of view and its relationship to capturing one or two eyes, and also to the size of the iris images obtained. Figure 33 depicts an embodiment of the image acquisition system 200 in which an IR-cut filter is placed over a portion of the sensor. A face or other image can be captured by the part of the sensor covered by the IR-cut filter, while an image for iris recognition is captured by the uncovered part. Ocular dominance tends to introduce uncertainty in the horizontal direction because of the horizontal arrangement of the human eyes, and therefore the image acquisition system 200 can be correspondingly configured with a horizontally shaped filtering region on the sensor. Figure 34 depicts another embodiment in which the mirror is tilted such that the user views the sensor/lens assembly at an angle, and the eyes are near the top of the sensor rather than in the middle of the sensor. This configuration can allow placement of the IR-cut filter at one end of the sensor, thus allowing the sensor to have two distinct regions (IR-cut and non-IR-cut) instead of three regions (non-IR-cut, IR-cut and non-IR-cut), which is the case illustrated in Figure 33. This allows a larger and more contiguous non-iris portion of a scene to be obtained. Figure 35 shows another embodiment of the image acquisition system 200 in which an operator can hold the image acquisition device 200 in order to obtain iris images from the user. In this embodiment, there is a transparent guide channel through which the operator can look to align with the user's eye. Additionally or alternatively, spaced guidance markers can be placed on top of the image acquisition device 200 so that the operator aligns the user's eye with two markers, for example. Figure 36 shows an expanded view of an embodiment of a guide channel. In this embodiment, circular rings can be printed on the inside of the guide channel, at the back and at the front of the guide channel, as shown. When the user is aligned, these rings appear concentric to the operator; otherwise they will not appear concentric, indicating that the user's eye is misaligned. Figure 36 also shows a visible illuminator (LED) on the device, as well as infrared illuminators that can be used for the purpose of iris recognition.
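The mirror sizing above can be checked with simple geometry: the virtual image sits at twice the eye-to-mirror distance, so a flat mirror of width w shows the user a strip of their face of width 2w, independent of the operating distance itself. A minimal sketch (the helper name is ours) verifying the 14 mm example:

```python
def visible_face_width_mm(mirror_width_mm: float) -> float:
    """Width of the face region a user sees reflected in a flat mirror.

    The virtual image is at twice the eye-to-mirror distance, so the
    reflective field of view spans twice the mirror width regardless of
    the operating distance.
    """
    return 2.0 * mirror_width_mm

# 14 mm mirror at ~9" (22.86 cm): the user sees a 28 mm strip of face,
# i.e., roughly two typical iris diameters (2 x 10.5 mm = 21 mm) with margin.
print(visible_face_width_mm(14.0))  # 28.0
```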
Figure 37 depicts another embodiment of the image acquisition system. In this embodiment, the LEDs are driven by controllers that are in turn connected to a processor, which is also connected to the sensor used for iris recognition. An embodiment of a method for capturing images of an iris and a scene using a single image sensor is illustrated in Figure 38. An image sensor captures a view of a scene and a view of an iris in at least one image (382). An image processing module applies a level of noise reduction to a first part of the at least one image to produce an image of the scene (384). The image processing module applies a reduced level of noise reduction to a second part of the at least one image to produce an iris image for use in biometric identification (386). Referring further to Figure 38, and in more detail, an image sensor 202 of an image acquisition system 200 captures a view of a scene and a view of an iris in at least one image (382). The image sensor can capture the scene view in one image and the iris view in another image. In some embodiments, the image sensor can capture the scene view and the iris view in a single image; for example, the scene view can include at least part of the iris. The image sensor can capture the scene view and the iris view in a plurality of images. The image sensor can capture the scene view in some images and the iris view in other images, or can capture both views in some of the images. The image sensor can capture two or more images over a period of time, or in close succession, for example, for comparison or later processing. The image sensor can capture two or more images under different conditions, for example, with and without infrared illumination, or with or without the use of any type of filter discussed in this document. In some embodiments, image acquisition system 200 may comprise an iris capture mode and a scene (e.g., non-iris) capture mode. The image sensor can capture an image of the scene view in scene capture mode, and an image of the iris view in iris capture mode. In certain embodiments, the image acquisition system 200 can perform iris and non-iris image capture concurrently in another mode. A user can select a mode for image acquisition, for example, through an application running on the image acquisition device 200. In some embodiments, the image acquisition system can capture the scene view and the iris view as separable components within a single image. The image acquisition system can capture the scene view and/or the iris view using any embodiment and/or combination of the interleaved filters, IR-cut filters, IR-pass filters, and other types of filters described in this document. In some embodiments, the image sensor comprises a plurality of sensor nodes. The image sensor can activate a first subset of sensor nodes primarily adapted to capture an image of the iris suitable for biometric identification, and a second subset of sensor nodes primarily adapted to capture a non-iris image. An IR-pass filter (e.g., one passing G+I), or another filter, can be applied over a sensor node primarily adapted to capture an image of the iris. An IR-cut, visible-pass, specific band-pass or color filter can be applied over a sensor node primarily adapted to capture a non-iris image.
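As a sketch of the mode-dependent node selection just described, the snippet below partitions a frame into the two node subsets. The 2x2 tile position of the IR-sensitive nodes and the function names are illustrative assumptions consistent with the R,G,(G+I),B interleave discussed earlier, not a disclosed layout.

```python
import numpy as np

def node_masks(height, width):
    """Boolean masks over a hypothetical sensor where the (G+I) nodes
    occupy one position of each 2x2 tile, as in an R,G,(G+I),B interleave."""
    gi = np.zeros((height, width), dtype=bool)
    gi[1::2, 0::2] = True          # IR-sensitive (iris) subset
    return gi, ~gi                 # (iris subset, non-iris subset)

def select_for_mode(frame, mode):
    """Return the (flattened) pixel values relevant to the selected mode."""
    iris_nodes, scene_nodes = node_masks(*frame.shape)
    return frame[iris_nodes] if mode == "iris" else frame[scene_nodes]
```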
In some embodiments, the image sensor captures at least one image of the iris while the iris is illuminated with infrared illumination. The image sensor can capture at least one iris image without infrared illumination, or with a visible-light illuminator turned off. The image sensor can capture at least one image of the iris using illumination from a screen of the image acquisition system 200. The image sensor can capture at least one image of the iris when the iris is aligned with a portion of the sensor using a mirror of the image acquisition system 200 for guidance, or when the iris is aligned with a portion of the sensor by an operator using a transparent guide channel and/or markers. Referring further to step (384), an image processing module can apply a level of noise reduction to a first portion of the at least one image to produce an image of the scene. The image acquisition system 200 can apply noise reduction to an image captured by the image sensor, or to an image stored in the image acquisition system 200, for example, in a storage device or buffer. The image acquisition system 200 can apply noise reduction comprising applying an averaging, median or other filtering function over a neighborhood of pixels of an image, for example over a 3x3 pixel window. The image acquisition system 200 can apply noise reduction comprising reducing one or both of time-varying and non-time-varying noise in a captured image. The image acquisition system 200 can account for or exclude a known faulty pixel while performing image processing and/or noise reduction. Image acquisition system 200 can apply noise reduction using an image processing module that can include one or more image signal processors 206 and/or another processor 208. Image acquisition system 200 can apply noise reduction by identifying, accounting for and/or compensating for the presence of systematic noise. In some embodiments, the image processing module can apply noise reduction to an image captured in non-iris capture mode. The image processing module can apply a level of noise reduction to a part of an image not intended for biometric iris identification, for example a part corresponding to an IR-cut filter. The image processing module can apply noise reduction or filtering to a general or non-iris image, and can generate an image of a general scene that is noticeably better (e.g., to a human viewer) than the image before noise reduction. Referring further to step (386), the image processing module can apply a reduced level of noise reduction to a second portion of the at least one image to produce an iris image for use in biometric identification. In some embodiments, the image processing module can disable noise reduction in an image for use in biometric iris identification; for example, it can determine that the noise level does not overwhelm the captured iris texture. The image processing module can perform biometric iris identification based on a raw or unprocessed image captured by the image sensor, or based on the captured image after some processing, for example, removal of artifacts, sporadic noise and/or systematic noise.
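A minimal sketch of the asymmetric noise-reduction steps (384) and (386) follows, assuming a 3x3 median filter for the scene region and only an optional systematic-noise (per-row offset) correction for the iris region; SciPy is used for brevity, and the mask/offset inputs are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def process_regions(image, iris_mask, row_offsets=None):
    """Apply full noise reduction to the scene part and a reduced level
    to the iris part of a single captured image.

    iris_mask:   True where the image is destined for iris recognition
    row_offsets: optional per-row systematic noise (e.g., estimated from
                 covered calibration pixels at the sensor edge) to subtract
    """
    out = image.astype(np.float32)
    if row_offsets is not None:                    # systematic noise only;
        out -= row_offsets[:, None]                # iris texture untouched
    scene = median_filter(out, size=3)             # e.g., 3x3 median window
    out[~iris_mask] = scene[~iris_mask]            # full reduction: scene part
    return np.clip(out, 0, 255).astype(np.uint8)   # iris part left sharp
```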
In some embodiments, the image processing module can apply a reduced level of noise reduction to an image for use in biometric iris identification, for example an image captured while in iris capture mode. The image processing module can perform noise reduction for systematic and/or sporadic noise, while disabling noise reduction for non-systematic noise. The image processing module can apply a reduced level of noise reduction, or systematic noise reduction only, to a part of an image extracted for iris biometric identification, for example a part corresponding to an IR-pass filter. In some embodiments, image processing module 220 subtracts noise in one iris image using noise from another iris image. Such subtraction can result in reduced systematic and/or sporadic noise. Image processing module 220 can align the two images to perform the subtraction, using common reference points (e.g., edges of shapes), pattern recognition/matching, correlation, and/or other algorithms. The image processing module 220 can subtract noise corresponding to the overlapping portion of the two images. Image processing module 220 can reduce ambient noise in one image by using ambient noise from another image. Ambient noise can comprise signals from ambient light or illumination, such as artifacts from surrounding light sources or reflections of surrounding objects on a surface of the eye. In some embodiments, image processing module 220 can reduce ambient noise in an image captured in the presence of infrared illumination by using ambient noise from another image captured without infrared illumination. In certain embodiments, the image processing module 220 can recover an infrared component from one or more (G+I) pixels imaged at a set of sensor nodes. The image processing module 220 can subtract the G component from (G+I) using a G intensity value at a neighboring pixel. In some embodiments, image processing module 220 can subtract the G component using an estimated G intensity value, and can use the estimated G intensity value when processing a non-iris part (e.g., a general scene) of an image. In some embodiments, image processing module 220 can perform gain or brightness control or adjustment on a portion of the at least one image to produce an iris image for use in biometric identification. In some embodiments, the amount of infrared illumination may be insufficient or suboptimal, so controlling or adjusting gain or brightness may improve iris image quality. In certain embodiments, control or adjustment of gain or brightness may be preferable to adding infrared illuminators, drawing power to provide infrared illumination, and/or controlling infrared illumination (e.g., under different conditions). Since infrared signals are captured by only a fraction of the sensor nodes/pixels (e.g., in an R,G,(G+I),B array), compensation via control or adjustment of gain or brightness may be appropriate. Having described certain embodiments of the methods and systems, it will now be apparent to those skilled in the art that other embodiments incorporating the concepts of the invention can be used.
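Where infrared illumination is weak, the digital gain adjustment mentioned above can be sketched as follows; the target mean, gain cap and clipping behavior are assumptions chosen for illustration, not disclosed parameters.

```python
import numpy as np

def apply_iris_gain(ir_plane, target_mean=110.0, max_gain=4.0):
    """Brightness/gain adjustment for a recovered infrared plane.

    Because only a fraction of sensor nodes (the (G+I) pixels in an
    R,G,(G+I),B array) sense infrared, a modest digital gain can stand in
    for extra IR illuminators or higher LED drive power.
    """
    mean = float(ir_plane.mean()) or 1.0           # guard against an all-dark plane
    gain = min(max_gain, target_mean / mean)
    return np.clip(ir_plane.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```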
It should be understood that the systems described above may provide multiples of any or each of their components, and these components may be provided on a standalone machine or, in some embodiments, on multiple machines in a distributed system. The systems and methods described above may be implemented as a method, apparatus or article of manufacture using programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. Furthermore, the systems and methods described above may be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The term "article of manufacture" as used in this document is intended to encompass code or logic accessible from and embedded in one or more computer-readable devices, firmware, programmable logic, memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, SRAMs, etc.), hardware (e.g., integrated circuit chips, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), etc.), electronic devices, or a non-volatile computer-readable storage unit (e.g., CD-ROM, floppy disk, hard disk drive, etc.). The article of manufacture may be accessible from a file server providing access to the computer-readable programs via a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. The article of manufacture may be a flash memory card or a magnetic tape. The article of manufacture includes hardware logic as well as software or programmable code embedded in a computer-readable medium that is executed by a processor. In general, the computer-readable programs may be implemented in any programming language, such as LISP, PERL, C, C++, C#, or PROLOG, or in any bytecode language such as JAVA. The software programs may be stored on or in one or more articles of manufacture as object code.
Claims (20)

1. A method for processing images acquired using a single image sensor, characterized by comprising: acquiring, by an image sensor, a first image of an object at a predetermined distance from the image sensor, the first image including a non-time-varying signal component corresponding to the object and a random time-varying signal component introduced by one or more noise sources; and processing the first image according to a selection of a first and a second mode, comprising: if the first mode is selected, acquiring the first image using a first part of the image sensor and processing the first image acquired through the first part, retaining signals from the non-time-varying and time-varying signal components with spatial frequencies at and below a predetermined threshold spatial frequency for iris recognition, wherein the object comprises an iris; and if the second mode is selected, acquiring the first image using a second part of the image sensor and processing the first image acquired through the second part by reducing at least a part of the signals from the non-time-varying and time-varying signal components with spatial frequencies at or below the threshold spatial frequency.

2. Method according to claim 1, characterized in that, if the second mode is selected, the first image is processed with a frequency-based signal reduction process or by applying an averaging or median calculation to the first image.

3. Method according to claim 1, characterized in that it further comprises using a first type of filter on the first part of the image sensor and a second type of filter on the second part of the image sensor.

4. Method according to claim 1, characterized in that the image sensor comprises a plurality of sensor nodes, the first part of the image sensor comprising a first subset of the sensor nodes and the second part of the image sensor comprising a second subset of the sensor nodes.

5. Method according to claim 1, characterized in that the predetermined threshold is the equivalent of 2 line pairs/mm for a 60% modulation transfer function for an object of a predetermined size at a predetermined distance from the image sensor.

6. Method according to claim 1, characterized in that it further comprises subtracting systematic noise from the first image.

7. Method according to claim 1, characterized in that it further comprises reducing ambient noise in the first image.

8. Method according to claim 1, characterized in that it further comprises reducing the ambient noise of a first image acquired in the presence of infrared illumination using the ambient noise of another image acquired without infrared illumination.

9. Method according to claim 1, characterized in that it further comprises disabling an infrared cut (IR-cut) filter for the image sensor when acquiring the first image if the first mode is selected, and enabling the IR-cut filter when acquiring the first image if the second mode is selected.

10. Method according to claim 1, characterized in that acquiring the first image comprises activating a plurality of sensor nodes of the image sensor, the first part comprising a first subset of the sensor nodes configured primarily to capture an image of the iris suitable for biometric identification and the second part comprising a second subset of the sensor nodes configured primarily to capture a non-iris image.

11. Apparatus for processing images acquired using a single image sensor, characterized by comprising: an image sensor configured to acquire a first image of an object at a predetermined distance from the image sensor, the first image including a non-time-varying signal component corresponding to the object and a random time-varying signal component introduced by one or more noise sources; and an image processing module configured to process the first image according to a selection of a first and a second mode, the image processing module configured to: if the first mode is selected, process the first image acquired via a first part of the image sensor, retaining signals from the non-time-varying and time-varying signal components with spatial frequencies at and below a predetermined threshold spatial frequency for iris recognition, wherein the object comprises an iris; and if the second mode is selected, process the first image acquired via a second part of the image sensor, reducing at least a part of the signals from the non-time-varying and time-varying signal components with spatial frequencies at or below the threshold spatial frequency.

12. Apparatus according to claim 11, characterized in that the image processing module is configured to process the first image with a frequency-based signal reduction process or to apply an averaging or median calculation function to the first image if the second mode is selected.

13. Apparatus according to claim 11, characterized in that it further comprises a first type of filter on the first part of the image sensor and a second type of filter on the second part of the image sensor.

14. Apparatus according to claim 11, characterized in that the image sensor comprises a plurality of sensor nodes, the first part of the image sensor comprising a first subset of the sensor nodes and the second part of the image sensor comprising a second subset of the sensor nodes.

15. Apparatus according to claim 11, characterized in that the predetermined threshold is the equivalent of 2 line pairs/mm for a 60% modulation transfer function for an object of predetermined size at a predetermined distance from the image sensor.

16. Apparatus according to claim 11, characterized in that the image processing module is additionally configured to subtract systematic noise from the first image.

17. Apparatus according to claim 11, characterized in that the image sensor comprises a Complementary Metal Oxide Semiconductor (CMOS) sensor.

18. Apparatus according to claim 11, characterized in that the image processing module is configured to reduce the ambient noise of a first image acquired in the presence of infrared illumination using ambient noise of another image acquired without infrared illumination.

19. Apparatus according to claim 11, characterized in that it further comprises an infrared cut (IR-cut) filter that is disabled for the image sensor when acquiring the first image if the first mode is selected, and enabled when acquiring the first image if the second mode is selected.

20. Apparatus according to claim 11, characterized in that the image sensor comprises a plurality of sensor nodes, a first subset of the sensor nodes configured primarily to capture an iris image suitable for biometric identification, and a second subset of the sensor nodes configured primarily to capture a non-iris image.
US9113036B2|2013-07-17|2015-08-18|Ebay Inc.|Methods, systems, and apparatus for providing video communications| US10372982B2|2014-01-06|2019-08-06|Eyelock Llc|Methods and apparatus for repetitive iris recognition| CN105917357B|2014-02-12|2019-06-28|三星电子株式会社|Sensitive bio-identification camera with bandpass filter and variable light source| WO2015131198A1|2014-02-28|2015-09-03|Lrs Identity, Inc.|Dual iris and color camera in a mobile computing device| JP6417676B2|2014-03-06|2018-11-07|ソニー株式会社|Information processing apparatus, information processing method, eyewear terminal, and authentication system| KR102198852B1|2014-03-24|2021-01-05|삼성전자 주식회사|Iris recognition apparatus and and mobile apparatus having the same| CN104102906B|2014-07-16|2017-10-17|广东欧珀移动通信有限公司|A kind of data processing method and equipment applied to iris authentication system| KR102215077B1|2014-08-04|2021-02-10|삼성전자주식회사|Iris recognition apparatus and iris recognition method| BR112017004593A2|2014-09-12|2017-12-05|Eyelock Llc|methods and apparatus for directing a user's gaze on an iris recognition system| JP6609098B2|2014-10-30|2019-11-20|キヤノン株式会社|Display control apparatus, display control method, and computer program| CN104580854A|2014-12-23|2015-04-29|南昌欧菲光电技术有限公司|Imaging device and mobile terminal| CN107683446B|2015-03-13|2020-11-03|苹果公司|Method for automatically identifying at least one user of an eye tracking device and eye tracking device| US20160275348A1|2015-03-17|2016-09-22|Motorola Mobility Llc|Low-power iris authentication alignment| CN105100567B|2015-07-13|2019-07-16|南昌欧菲光电技术有限公司|Imaging device and mobile terminal| US9911023B2|2015-08-17|2018-03-06|Hand Held Products, Inc.|Indicia reader having a filtered multifunction image sensor| CN105117903B|2015-08-31|2020-03-24|联想有限公司|Information processing method and electronic equipment| CN105763774B|2016-02-29|2019-07-26|联想有限公司|A kind of camera module, electronic equipment and image-pickup method| KR20180022112A|2016-08-23|2018-03-06|삼성전자주식회사|Electronic device including the iris recognition sensor and operating method thereof| US10880742B2|2016-12-15|2020-12-29|Fotonation Limited|Iris recognition workflow| CN107451531B|2017-07-07|2021-09-07|Oppo广东移动通信有限公司|Iris recognition method and electronic device| US20190087519A1|2017-09-15|2019-03-21|Pearson Education, Inc.|Monitoring physical simulations within a digital credential platform| CN107968854A|2017-11-22|2018-04-27|广东欧珀移动通信有限公司|Electronic equipment| KR20190104796A|2018-03-02|2019-09-11|삼성전자주식회사|Method for generating plural information using camera to sense plural wave bandwidth and apparatus thereof| RU2726040C1|2019-12-27|2020-07-08|федеральное государственное автономное образовательное учреждение высшего образования "Санкт-Петербургский политехнический университет Петра Великого" |Method of forming an identification code of an information-protective label with a given level of uniqueness| CN111583459A|2020-04-28|2020-08-25|德施曼机电(中国)有限公司|Intelligent door lock system based on iris detection| CN111583456A|2020-04-28|2020-08-25|德施曼机电(中国)有限公司|Door lock device based on iris recognition|
Legal status:
2019-12-10 | B25A | Requested transfer of rights approved | Owner name: EYELOCK LLC (US)
2020-06-02 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette]
2021-04-20 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-06-22 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 16/02/2012, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
US201161443757P (provisional) | 2011-02-17 |
US61/443,757 | 2011-02-17 |
US201161472279P (provisional) | 2011-04-06 |
US61/472,279 | 2011-04-06 |
PCT/US2012/025468 (WO2012112788A2) | 2012-02-16 (priority date 2011-02-17) | Efficient method and system for the acquisition of scene imagery and iris imagery using a single sensor